PRISM

Benchmark
Model: resource-gathering v.2 (MDP)
Parameter(s): B = 1300, GOLD_TO_COLLECT = 100, GEM_TO_COLLECT = 100
Property: expsteps (exp-steps)
Invocation (default)
../fix-syntax ../prism -javamaxmem 11g -cuddmaxmem 4g -heuristic speed -e 1e-6 -maxiters 1000000 resource-gathering.pm resource-gathering.prctl --property expsteps -const B=1300,GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100
Execution
Walltime: 25.287198781967163 s
Return code: 0
Relative Error: 5.8081603920028655e-06
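
The relative error above is presumably the deviation of PRISM's computed value from a known reference result for this benchmark instance. A minimal sketch of that comparison (the reference value is a hypothetical input, not taken from the log):

```python
def relative_error(result: float, reference: float) -> float:
    """Relative deviation of a computed result from a reference value."""
    return abs(result - reference) / abs(reference)

# With the value reported in the log below (1292.5850850074933) and a
# hypothetical known-correct reference result, this would give a figure
# on the order of 5.8e-06.
```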
Log
PRISM
=====

Version: 4.5.dev
Date: Fri Feb 26 16:13:43 CET 2021
Hostname: christopher
Memory limits: cudd=4g, java(heap)=11g
Command line: prism -javamaxmem 11g -cuddmaxmem 4g -heuristic speed -e 1e-6 -maxiters 1000000 resource-gathering.pm_fixed resource-gathering.prctl_fixed --property expsteps -const 'B=1300,GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100'

Parsing model file "resource-gathering.pm_fixed"...

Type:        MDP
Modules:     robot goldcounter gemcounter 
Variables:   gold gem attacked x y required_gold required_gem 

Parsing properties file "resource-gathering.prctl_fixed"...

3 properties:
(1) "expgold": R{"rew_gold"}max=? [ C<=B ]
(2) "expsteps": R{"time_reward"}min=? [ F "success" ]
(3) "prgoldgem": Pmax=? [ F<=B "success" ]

---------------------------------------------------------------------

Model checking: "expsteps": R{"time_reward"}min=? [ F "success" ]
Model constants: GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100,B=1300

Warning: Switching to sparse engine and (backwards) Gauss Seidel (default for heuristic=speed).

Building model...
Model constants: GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100,B=1300

Computing reachable states...

Reachability (BFS): 1215 iterations in 0.49 seconds (average 0.000400, setup 0.00)

Time for model construction: 0.514 seconds.

Type:        MDP
States:      958894 (1 initial)
Transitions: 3325526
Choices:     3080702

Transition matrix: 898 nodes (4 terminal), 3325526 minterms, vars: 23r/23c/4nd

Prob0A: 1203 iterations in 0.74 seconds (average 0.000615, setup 0.00)

Prob1E: 1204 iterations in 0.82 seconds (average 0.000679, setup 0.00)

Warning: PRISM hasn't checked for zero-reward loops.
Your minimum rewards may be too low...

goal = 94, inf = 0, maybe = 958800

Computing remaining rewards...
Engine: Sparse

Building sparse matrix (transitions)... [n=958894, nc=3080400, nnz=3325200, k=4] [53.5 MB]
Building sparse matrix (transition rewards)... [n=958894, nc=3080400, nnz=0, k=4] [15.4 MB]
Creating vector for state rewards... [7.3 MB]
Creating vector for inf... [7.3 MB]
Allocating iteration vectors... [2 x 7.3 MB]
TOTAL: [98.1 MB]

Starting iterations...
Iteration 299: max relative diff=0.003344, 5.01 sec so far
Iteration 599: max relative diff=0.001669, 10.03 sec so far
Iteration 903: max relative diff=0.001107, 15.04 sec so far
Iteration 1214: max relative diff=0.000824, 20.04 sec so far

Iterative method: 1369 iterations in 22.87 seconds (average 0.016431, setup 0.38)

Value in the initial state: 1292.5850850074933

Time for model checking: 24.448 seconds.

Result: 1292.5850850074933 (value in the initial state)


Overall running time: 25.194 seconds.

---------------------------------------------------------------------

Note: There were 2 warnings during computation.
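
For readers unfamiliar with the "iterative method" the log reports, the following is a minimal, illustrative sketch of Gauss-Seidel-style value iteration for the minimum expected accumulated reward until reaching a goal set, using a maximum-relative-difference stopping criterion as configured by the -e 1e-6 and -maxiters options. The data structures (states, actions, P, reward, goal) are assumptions for illustration only and do not correspond to PRISM's internal sparse-matrix representation.

```python
def min_expected_reward(states, actions, P, reward, goal,
                        epsilon=1e-6, max_iters=1_000_000):
    """Illustrative sketch (not PRISM's implementation).

    P[s][a]   -- list of (probability, successor) pairs for action a in state s
    reward[s] -- reward accumulated per visit to state s
    actions(s)-- iterable of actions enabled in s (assumed non-empty for non-goal states)
    goal      -- set of target states, whose value is fixed at 0
    """
    V = {s: 0.0 for s in states}
    for _ in range(max_iters):
        max_rel_diff = 0.0
        for s in states:
            if s in goal:
                continue
            # Bellman update for minimum expected accumulated reward.
            new_v = min(reward[s] + sum(p * V[t] for p, t in P[s][a])
                        for a in actions(s))
            if new_v > 0.0:
                max_rel_diff = max(max_rel_diff, abs(new_v - V[s]) / new_v)
            V[s] = new_v  # in-place update, i.e. Gauss-Seidel style
        if max_rel_diff < epsilon:
            break
    return V
```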